Conversation
Added papers:

- Kareus: Energy optimization in large model training
- AXLearn: Modular training on heterogeneous infrastructure (MLSys '26)
- SuperInfer: SLO-aware scheduling for LLM inference (MLSys '26)
- Scaling Up Small LMs: Efficient serving and deployment (MLSys '26)
- BestServe: Serving strategies with optimal goodput
- OptiKIT: Automated enterprise LLM optimization (MLSys '26)

Co-authored-by: Jiachen LIU <AmberLJC@users.noreply.github.com>
## Pull Request Review

Thank you for the contribution! I've reviewed PR #52 and overall the changes look good. Here's my detailed feedback:

### ✅ Strengths
### 🔍 Minor Observations
### 📋 Code Quality Assessment

Since this is a documentation-only change:
### 🔗 URL Verification

I recommend verifying these arXiv links are accessible:
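A link check like this can be done mechanically. Below is a minimal sketch using only the Python standard library; the regex, function names, and User-Agent string are assumptions for illustration, not part of this repository:

```python
import re
import urllib.request

# Matches arXiv abs/pdf URLs such as https://arxiv.org/abs/2501.01234
ARXIV_LINK = re.compile(r"https?://arxiv\.org/(?:abs|pdf)/\d{4}\.\d{4,5}(?:v\d+)?")

def extract_arxiv_links(markdown: str) -> list[str]:
    """Collect every arXiv abs/pdf URL that appears in the text."""
    return ARXIV_LINK.findall(markdown)

def is_reachable(url: str, timeout: float = 10.0) -> bool:
    """HEAD-request the URL; treat any HTTP status below 400 as accessible."""
    req = urllib.request.Request(
        url, method="HEAD", headers={"User-Agent": "link-checker/0.1"}
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except OSError:
        return False
```

Running `extract_arxiv_links` over `README.md` and filtering with `is_reachable` would flag any broken citation before merge.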
### ✨ Overall Assessment

Status: ✅ Approved (post-merge review)

This is a high-quality contribution that adds valuable recent papers to the repository. The formatting is exemplary and follows all guidelines. The minor observations above are suggestions for potential future improvements but do not detract from the quality of this PR.

Great work maintaining the repository's high standards! 🎉

Review generated via Claude Code
Added 6 new papers from the 1/29 submission:
Training papers:
Serving papers:
All entries follow the repository's formatting guidelines.
Resolves #51
Generated with Claude Code
**Note**

**Low Risk**

Documentation-only changes (new links/entries in `README.md`) with no code or runtime impact; the main risk is broken or incorrect citations or formatting.

**Overview**

Adds six new paper entries to `README.md` to expand the curated LLM systems list. Two papers are added under Training / Pre-training (`Kareus`, `AXLearn`) and four under Serving / LLM serving (`SuperInfer`, `Scaling Up Efficient Small Language Models Serving`, `BestServe`, `OptiKIT`).

Written by Cursor Bugbot for commit 19fd6d3. This will update automatically on new commits.